Build batched main sumcheck virtual polynomials directly from monomial terms instead of reconstructing a large Expression tree and monomializing it again. This removes expensive expression rebuild work on CPU proof generation while preserving proof semantics. Also extend integration timeout to allow the existing slow batched proof path to complete after increasing stack size.
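As a rough illustration of the monomial-term approach, here is a minimal sketch with hypothetical types — `MonomialTerm` and `VirtualPoly` are illustrative names, not the actual Ceno structures, and real code operates over field elements rather than plain integers:

```rust
/// One monomial: a scalar coefficient times a product of witness columns.
/// (Hypothetical type for illustration.)
struct MonomialTerm {
    coeff: i64,
    /// Indices into the shared list of multilinear polynomials.
    factors: Vec<usize>,
}

/// A virtual polynomial assembled directly from the flat monomial-term
/// list, skipping any intermediate expression-tree reconstruction.
struct VirtualPoly {
    terms: Vec<MonomialTerm>,
}

impl VirtualPoly {
    /// Evaluate at one hypercube point, given each column's value there.
    fn eval_at(&self, column_evals: &[i64]) -> i64 {
        self.terms
            .iter()
            .map(|t| t.factors.iter().fold(t.coeff, |acc, &i| acc * column_evals[i]))
            .sum()
    }
}

fn main() {
    // 2*w0*w1 + 3*w2 evaluated at (w0, w1, w2) = (5, 7, 4):
    // 2*5*7 + 3*4 = 70 + 12 = 82.
    let vp = VirtualPoly {
        terms: vec![
            MonomialTerm { coeff: 2, factors: vec![0, 1] },
            MonomialTerm { coeff: 3, factors: vec![2] },
        ],
    };
    println!("{}", vp.eval_at(&[5, 7, 4]));
}
```

The point of the sketch is that the prover consumes the term list directly; there is no large `Expression` tree to rebuild and monomialize again on the proving path.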
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Problem
Main sumcheck was proved and verified per chip, which duplicated transcript work, selector/claim handling, and PCS opening plumbing across chips.
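A batched proof replaces that per-chip loop with one combined claim. A minimal sketch of the usual random-linear-combination pattern — hypothetical function name, plain integers standing in for field elements, not the actual Ceno API:

```rust
/// Combine per-chip sumcheck claims into one batched claim using powers
/// of a transcript-derived challenge `alpha` (random linear combination).
/// With a single shared transcript the challenge is sampled once rather
/// than per chip, so transcript work is no longer duplicated.
fn batch_claims(per_chip_claims: &[i64], alpha: i64) -> i64 {
    per_chip_claims
        .iter()
        .fold((0i64, 1i64), |(acc, pow), &claim| (acc + claim * pow, pow * alpha))
        .0
}

fn main() {
    // Three chips' claims folded with challenge alpha = 10:
    // 3*1 + 5*10 + 7*100 = 753.
    let batched = batch_claims(&[3, 5, 7], 10);
    println!("batched claim: {batched}");
}
```

One sumcheck run then proves the batched claim for all chips at once, which is the duplication this PR removes.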
Design Rationale
Use one global batched main sumcheck proof while keeping PCS openings in the existing suffix path. The verifier mirrors the prover's transcript order, including ECC bridge sampling before the global combine-subset-evals challenge, and evaluates frontloaded expressions on the verifier side.

Change Highlights
- ceno_zkvm: batches main constraints into a single global proof path across chip proofs.
- ceno_zkvm: keeps witness/fixed PCS openings per chip after global main verification.
- ceno_recursion: mirrors native verifier changes for the batched main proof.
- ceno-gpu: supports the batched main proving flow.

Benchmark / Performance Impact
Benchmark session compares the frontload baseline against successive `feat/batch_main_sumcheck` optimization runs on block 23817600, GPU proving, `CENO_GPU_ENABLE_WITGEN=0`.

Comparison convention: lower time is better. Signed x values use -Nx for slower-than-baseline wall time and +Nx for faster/lower-time metrics; for example, taking twice as long is -2.00x.

Timeline / Optimization Progress
- 7a07649b, GPU 1118dca8
- dd229c00, GPU 340651b4
- d5ae1b3a, GPU fbef26f3
- c2c45cc9, GPU 3dedbc78

E2E / Layer
App Prove Breakdown
Profiler module totals can overlap because chip proving is concurrent; use the `app_prove` wall time above for critical-path impact. The latest run materially reduces the new batched-main cost, but total wall time is still slower than the frontload baseline.

Latest Improvement Against Previous Batched Run
Benchmark command:
Environment:
`cc=8.9`, 24 GB GPU memory, `nightly-2025-11-20`, cargo `1.93.0-nightly`.

- 7a07649b, GPU 1118dca8, summary.
- dd229c00, GPU 340651b4, summary.
- d5ae1b3a, GPU fbef26f3, summary.
- c2c45cc9, GPU 3dedbc78, summary.

Summary: the latest optimization improves `prove_batched_main_constraints` by +1.74x against the previous batched-main run (26.925s -> 15.457s) and improves E2E by +1.13x (104.000s -> 91.800s). It remains slower than the frontload baseline (75.600s -> 91.800s, -1.21x), with the remaining gap concentrated in the new batched-main critical path.

Testing
Also passed the linked GPU e2e benchmark run above.
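For reference, the signed ratio convention used in the summary above can be reproduced with a small helper (illustrative only, not part of the benchmark harness):

```rust
/// Signed speedup per the convention above: lower time is better,
/// +Nx means the new run is faster, -Nx means it is slower.
fn signed_speedup(baseline_s: f64, new_s: f64) -> f64 {
    if new_s <= baseline_s {
        baseline_s / new_s // faster or equal: +Nx
    } else {
        -(new_s / baseline_s) // slower: -Nx
    }
}

fn main() {
    // 26.925s -> 15.457s: faster, ~ +1.74x.
    println!("{:+.2}x", signed_speedup(26.925, 15.457));
    // 75.600s -> 91.800s: slower, ~ -1.21x.
    println!("{:+.2}x", signed_speedup(75.600, 91.800));
}
```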
Risks and Rollout
Follow-ups
- `prove_batched_main_constraints` critical-path cost.

Copilot Reviewer Directive (keep this section)
When Copilot reviews this PR, apply `.github/copilot-instructions.md` strictly.